11 research outputs found

    Zero-bias autoencoders and the benefits of co-adapting features

    Full text link
    Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is both to represent the input data and to act as a selection mechanism that ensures sparsity of the representation. We then show that negative biases impede the learning of data distributions whose intrinsic dimensionality is high. We also propose a new activation function that decouples the two roles of the hidden layer and that allows us to learn representations on data with very high intrinsic dimensionality, where standard autoencoders typically fail. Since the decoupled activation function acts like an implicit regularizer, the model can be trained by minimizing the reconstruction error of training data, without requiring any additional regularization.
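
    A minimal NumPy sketch of the idea described above, assuming tied weights and a fixed threshold: the hidden units carry no biases, the threshold is used only to select which units are active, and the values that pass through remain linear, so the model can be trained on plain reconstruction error. The class name, threshold value and update rule are illustrative assumptions, not the paper's exact formulation.

import numpy as np


def tlin(a, theta=1.0):
    # Thresholded linear unit: the threshold only selects which units are
    # active; the values that pass through remain linear in the input.
    return a * (np.abs(a) > theta)


class ZeroBiasAutoencoder:
    # Tied-weight autoencoder with no hidden biases (illustrative sketch).

    def __init__(self, n_vis, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_vis, n_hid))

    def encode(self, X, theta=1.0):
        return tlin(X @ self.W, theta)

    def reconstruct(self, X, theta=1.0):
        return self.encode(X, theta) @ self.W.T

    def train_step(self, X, lr=0.01, theta=1.0):
        # Plain reconstruction error; the thresholded activation plays the
        # role of the implicit regularizer, so no sparsity penalty is added.
        A = X @ self.W
        mask = (np.abs(A) > theta).astype(A.dtype)
        H = A * mask
        err = H @ self.W.T - X
        # Gradient of 0.5 * ||H W^T - X||^2 w.r.t. the tied weights W.
        grad = err.T @ H + X.T @ ((err @ self.W) * mask)
        self.W -= lr * grad / X.shape[0]
        return float(np.mean(err ** 2))

    One natural use, in line with the decoupling the abstract describes, is to let the threshold act only during training and read representations off the linear encoding afterwards.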

    EmoNets: Multimodal deep learning approaches for emotion recognition in video

    Full text link
    The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means-based "bag-of-mouths" model which extracts visual features around the mouth region, and a relational autoencoder which addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test-set accuracy of 47.67% on the 2014 dataset.
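
    The abstract describes combining per-modality predictions into one common classifier. The sketch below shows one simple combination scheme, a weighted average of per-model class probabilities with weights picked by random search on validation data; the function names and the choice of random search are illustrative assumptions, not necessarily the combination strategy used in the paper.

import numpy as np


def fuse_predictions(probs_by_model, weights):
    # Weighted average of per-modality class probabilities.
    # probs_by_model: list of (n_samples, n_classes) arrays, one per
    # specialist model (e.g. face CNN, audio DBN, bag-of-mouths, ...).
    P = np.stack(probs_by_model)            # (n_models, n_samples, n_classes)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, P, axes=1)       # (n_samples, n_classes)


def search_fusion_weights(probs_val, y_val, n_trials=2000, seed=0):
    # Pick fusion weights by random search on validation accuracy.
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(n_trials):
        w = rng.random(len(probs_val))
        acc = float(np.mean(fuse_predictions(probs_val, w).argmax(axis=1) == y_val))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

    Weighted averaging of this kind keeps each specialist model intact and only fits a handful of fusion parameters, which is convenient when the modality-specific models are trained separately.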

    Unsupervised relational feature learning for vision

    No full text
    This thesis contributes to the field of machine learning with a specific focus on methods for learning relations between inputs. Learning relationships between images is a fundamental primitive in vision, and relationships across images play an important role in many vision tasks, among them motion estimation, activity recognition, stereo vision, multi-view geometry and visual odometry. Many of these tasks depend mainly on motion and disparity cues, which are inferred from relations across multiple image pairs. The approaches presented in this thesis mainly deal with, but are not limited to, learning representations for motion and depth. This thesis by articles consists of five articles which present relational feature learning models along with their applications in computer vision.

    In the first article, we present an approach for encoding motion in videos. We show that the detection of spatial transformations can be viewed as the detection of coincidence or synchrony between a given sequence of frames and a sequence of features related by the transformation we wish to detect. Learning to detect synchrony is made possible by introducing "multiplicative interactions" into the hidden units of single-layered sparse coding models. We show that the learned motion representations, employed for the task of activity recognition, achieve competitive performance on multiple benchmarks.

    Stereo vision is an important challenge in computer vision and is useful for many applications in the field. In the second article, we extend the energy-based learning models previously used for motion encoding to the context of depth perception. Given the common architecture of the models for encoding motion and depth, we show that it is possible to define a single model that learns a unified representation for both cues. Our experimental results show that learning a combined representation for depth and motion makes it possible to achieve state-of-the-art performance on the task of 3-D activity analysis and to outperform existing hand-engineered 3-D motion features.

    The autoencoder is a popular unsupervised method for learning efficient encodings of a given set of data samples. Regularized autoencoders, which are typically used to learn over-complete and sparse representations of the input data, have been shown to fail on intrinsically high-dimensional data such as videos. In the third article, we investigate the reason for this behavior. Regularized autoencoders typically learn negative hidden-unit biases, and we show that these negative biases result from the hidden units being responsible for both the sparsity and the representation of the input data. As a consequence, the model behaves like a clustering method, which would require an exponentially large number of features to model intrinsically high-dimensional data. Based on this understanding, we propose a new activation function that decouples the two roles of the hidden layer and uses a linear encoding, which makes it possible to learn representations on data with very high intrinsic dimensionality. We also show that the gating connections in the bi-linear models and the single-layer models from articles one and two of this thesis can be viewed as a way of attaining a linear encoding scheme, which allows them to learn good representations on videos.
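
    As a rough illustration of the multiplicative interactions mentioned for the first article, the sketch below computes mapping (transformation) units for a pair of frames by multiplying filter responses factor-wise, so that a unit responds when the two frames contain synchronized, transformation-related structure. The factored form and the matrix names U, V, W are assumptions for illustration, not the thesis's exact parameterization.

import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def mapping_units(x, y, U, V, W, theta=0.0):
    # Multiplicative interactions for a frame pair: filter responses on the
    # two frames are multiplied factor-wise, so a mapping unit responds when
    # matching (transformation-related) structure occurs in both frames.
    #   x, y: flattened frames at times t and t+1, shape (n_pixels,)
    #   U, V: filter pairs for the two frames, shape (n_pixels, n_factors)
    #   W:    pooling weights over factors, shape (n_factors, n_mappings)
    fx = U.T @ x                              # factor responses on frame t
    fy = V.T @ y                              # factor responses on frame t+1
    return sigmoid(W.T @ (fx * fy) - theta)   # synchrony detection
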
    Visual odometry is the task of inferring the egomotion of a moving object from visual information such as images and videos. It is primarily used for localization and has many applications in robotics and navigation. The work in the fourth article was motivated by the idea of applying deep learning techniques, which have been successful in many vision tasks, to visual odometry. Visual odometry mainly requires inferring motion and depth information from the visual input, which can then be mapped to velocity and change in direction. We use the relational feature models presented in articles one and two to infer a combined motion and depth representation from stereo video sequences. The combined representation is then mapped to discrete velocity and change-of-direction labels using convolutional neural networks. Our approach is an end-to-end deep learning architecture that uses a single type of computational model and learning rule. Preliminary results show that the architecture is capable of learning the mapping from input video to egomotion.

    Activity recognition is a challenging computer vision task with many real-world applications. It is well known that bringing computer vision research to real-time applications is hard. In the fifth article of this thesis, we present a real-time activity recognition system based on deep learning methods. Our approach uses energy-based relational feature learning models to compute local motion features directly from videos. A bag-of-words representation over the local motion features is used to analyze the activity in a given video sequence. We implement this system on a distributed computational platform and demonstrate its performance on the iCub robot. Using GPUs, we achieve real-time performance, which makes the deployment of activity recognition systems in real-world scenarios possible.
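
    The fifth article's pipeline builds a bag-of-words over local motion features. Below is a minimal sketch of that quantization step, assuming a codebook learned beforehand (e.g. with k-means) and leaving the final classifier out; the names are illustrative.

import numpy as np


def bag_of_words_histogram(local_features, codebook):
    # Quantize local motion descriptors against a learned codebook and
    # return a normalized word-count histogram for one video clip.
    #   local_features: (n_descriptors, d) motion features from the clip
    #   codebook:       (n_words, d) cluster centres, e.g. from k-means
    d2 = ((local_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)        # histogram fed to a classifier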

    A Review of Various Manufacturing Approaches for Developing Amorphous Solid Dispersions

    No full text
    In recent years, amorphous solid dispersions (ASDs) have gained tremendous attention as a means of improving the solubility of poorly water-soluble drug substances. Despite the stability challenges, various commercial ASD products have been successfully launched into the market over the last two decades. Among the various manufacturing approaches, hot melt extrusion (HME) and spray drying have attracted industry attention owing to their simple manufacturing processes. In addition, Kinetisol®, a solvent-free approach, is also being widely investigated for developing ASDs, since the thermal exposure time of the formulations is significantly shorter than in the hot melt extrusion process; Kinetisol® can therefore be employed for developing ASDs of thermolabile drug substances. Another solvent-based technique, electrospinning, is also employed for developing nanofiber-based ASDs. However, much research on the electrospinning process is warranted before it can be implemented in commercial manufacturing. Various critical factors such as drug-polymer solubility, the solubility of the drug in the polymer, drug-polymer interactions, the type of manufacturing process, and storage conditions need to be considered for developing a stable and robust ASD formulation. This review mainly focuses on the most advanced manufacturing technologies for ASDs, namely HME, spray drying, Kinetisol®, and electrospinning, along with a note on the various critical factors that affect the stability of ASD formulations. Keywords: amorphous solid dispersions; hot melt extrusion; spray drying; Kinetisol®; electrospinning